14 research outputs found

    Deep learning computer vision for robotic disassembly and servicing applications

    Fastener detection is a necessary step for computer vision (CV) based robotic disassembly and servicing applications. Deep learning (DL) provides a robust approach for creating CV models capable of generalizing to diverse visual environments. Such DL CV systems rely on tuning input resolution and mini-batch size parameters to fit the needs of the detection application. This paper provides a method for determining the optimal compromise between input resolution and mini-batch size to achieve the highest performance for cross-recessed screw (CRS) detection while utilizing maximum graphics processing unit resources. The Tiny-You Only Look Once v2 (Tiny-YOLO v2) DL object detection system was chosen to evaluate this method. Tiny-YOLO v2 was employed to solve the specialized task of detecting CRS, which are highly common in electronic devices. The method used in this paper for CRS detection is meant to lay the groundwork for multi-class fastener detection, as the method is not dependent on the type or number of object classes. An original dataset of 900 images of 12.3 MPx resolution was manually collected and annotated for training. Three additional distinct datasets of 90 images each were manually collected and annotated for testing. It was found that an input resolution of 1664 x 1664 pixels paired with a mini-batch size of 16 yielded the highest average precision (AP) among the seven models tested for all three testing datasets. This model scored an AP of 92.60% on the first testing dataset, 99.20% on the second, and 98.39% on the third.
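The resolution versus mini-batch trade-off described above can be sketched as a search over configurations that fit a fixed GPU memory budget. The memory model, byte counts, and candidate values below are illustrative assumptions, not figures from the paper:

```python
# Illustrative sketch: for each input resolution, find the largest mini-batch
# that fits a fixed GPU memory budget. The cost model (memory grows with
# batch * H * W * bytes_per_pixel) is a crude assumption, not Tiny-YOLO v2's
# actual footprint.

def fits(resolution, batch_size, budget_bytes, bytes_per_pixel=200):
    """Rough estimate: activation memory scales with batch * H * W."""
    return batch_size * resolution * resolution * bytes_per_pixel <= budget_bytes

def best_configs(resolutions, batch_sizes, budget_bytes):
    """Map each resolution to the largest mini-batch that still fits."""
    out = {}
    for r in resolutions:
        feasible = [b for b in batch_sizes if fits(r, b, budget_bytes)]
        if feasible:
            out[r] = max(feasible)
    return out

# Assumed 8 GiB budget; under this model, doubling the resolution
# quarters the feasible mini-batch size.
configs = best_configs([416, 832, 1664], [8, 16, 32, 64], budget_bytes=8 * 2**30)
```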

    A System Combining Force and Vision Sensing for Automated Screw Removal on Laptops

    This brief investigates the performance of an automated robotic system that uses a combination of vision and force sensing to remove screws from the back of laptops. This robotic system uses two webcams, one fixed over the robot and the other mounted on the robot, as well as a sensor-equipped (SE) screwdriver. Experimental studies were conducted to test the performance of the SE screwdriver and vision system. The parameters that were varied included the internal brightness settings on the webcams, the method in which the workspace was illuminated, and the color of the laptop case. A localized light source and a higher brightness setting as the laptop's case became darker produced the best results. In this brief, the SE screwdriver was able to successfully remove 96.5% of the screws. Note to Practitioners - The amount of discarded electronic waste (e-waste) is increasing rapidly, yet efficient, nondestructive, automated methods to handle the waste have not been developed. Many e-waste products such as laptops use fasteners that need to be removed. In this brief, we focus on removing screws from laptops in a nondestructive manner so as not to damage the laptop, allowing its parts to be recycled. Due to the vast number of laptop models, it is necessary to create a method that will automatically recognize the locations of these fasteners. This brief presents a prototype robotic system that integrates force and vision sensing to automatically locate and remove screws from various models of laptops. The methodology presented in this brief is applicable to other e-waste products with a casing attached by screws. A current limitation is that the robotic system has to investigate all potential hole locations found by the vision system, although some of these locations may not correspond to valid screw locations. This work could be extended with a memory feature that remembers the locations of the screws for similar laptop models handled by the system, improving the processing time.
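The vision-then-force verification loop described above can be sketched as follows. The function names and the probe stub are stand-ins for the real vision system and SE screwdriver, not the brief's actual interfaces:

```python
# Minimal sketch of the exhaustive candidate-probing loop: the vision system
# proposes hole locations, and the force-sensing screwdriver confirms which
# ones actually contain screws. `probe` is a hypothetical stand-in for the
# sensor-equipped screwdriver's engagement check.

def remove_screws(candidates, probe):
    """Visit every candidate hole; keep only locations where the force
    probe confirms a real screw."""
    confirmed = []
    for location in candidates:
        if probe(location):  # force signature indicates screw engagement
            confirmed.append(location)
    return confirmed

# Toy example: pretend two of the three candidates hold screws.
cands = [(10, 20), (15, 25), (30, 40)]
found = remove_screws(cands, probe=lambda loc: loc in {(10, 20), (30, 40)})
```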

    Using the Soar Cognitive Architecture to Remove Screws from Different Laptop Models

    This paper investigates an approach that uses the cognitive architecture Soar to improve the performance of an automated robotic system, which uses a combination of vision and force sensing to remove screws from laptop cases. Soar's long-term memory module, semantic memory, was used to remember pieces of information regarding laptop models and screw holes. The system was trained with multiple laptop models, and the method in which Soar was used to facilitate the removal of screws was varied to determine the best performance of the system. In all the cases, Soar could determine the correct laptop model and the orientation in which it was placed in the system. Soar was also used to remember which explored circle locations contained screws and which did not. Remembering the locations of the holes decreased a trial time by over 60%. The system performed best when the number of training trials used to explore circle locations was limited, as this decreased the total trial time by over 10% for most of the laptop models and orientations. Note to Practitioners - Although the amount of discarded electronic waste in the world is rapidly increasing, efficient methods that can handle this waste in an automated, nondestructive fashion have not been developed. Screws are a common fastener used on electronic products, such as laptops, and must be removed during nondestructive disassembly. In this paper, we focus on using the cognitive architecture Soar to facilitate the disassembly sequence of removing these screws from the back of laptops. Soar is able to differentiate between different models of laptops and store the locations of screws for these models, leading to an improvement in disassembly time when the same laptop model is used. Currently, this paper uses only one of Soar's long-term memory modules (semantic memory) and a screwdriver tool. However, this work could be extended to use multiple tools by using different features available in Soar, such as other long-term memory modules and substates.
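The semantic-memory usage described above amounts to a per-model, per-orientation store of confirmed screw locations, so repeat laptops skip the exhaustive probing. The sketch below is a generic key-value stand-in, not Soar's actual semantic-memory API; all names are assumptions:

```python
# Illustrative stand-in for the role Soar's semantic memory plays above:
# remember which candidate circles held screws for each (model, orientation)
# so that a repeat laptop can be serviced without re-probing every circle.

class ScrewMemory:
    def __init__(self):
        # (model, orientation) -> list of circle locations that held screws
        self._store = {}

    def remember(self, model, orientation, screw_circles):
        self._store[(model, orientation)] = list(screw_circles)

    def recall(self, model, orientation):
        """Return remembered screw locations, or None on a first encounter."""
        return self._store.get((model, orientation))

mem = ScrewMemory()
mem.remember("modelA", 0, [(5, 5), (90, 5)])
hit = mem.recall("modelA", 0)    # known model: probe only these circles
miss = mem.recall("modelB", 0)   # unknown model: fall back to full search
```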

    Characterization of Different Microsoft Kinect Sensor Models

    This experimental study investigates the performance of three different models of the Microsoft Kinect sensor using the OpenNI driver from PrimeSense. The accuracy, repeatability, and resolution of the different Kinect models' ability to determine the distance to a planar target were explored. An ANOVA analysis was performed to determine whether the Kinect model, the operating temperature, or their interaction were significant factors in the Kinect's ability to determine the distance to the target. Different sized gauge blocks were also used to test how well a Kinect could reconstruct precise objects. Machinist blocks were used to examine how well the Kinect could reconstruct objects set up at an angle and determine the location of the center of a hole. All the Kinect models were able to determine the location of a target with a low standard deviation (<2 mm). At close distances, the resolution of all the Kinect models was 1 mm. Through the ANOVA analysis, the best performing Kinect at close distances was the Kinect model 1414, and at farther distances the Kinect model 1473. The internal temperature of the Kinect sensor had an effect on the distance reported by the sensor. Using different correction factors, the Kinect was able to determine the volume of a gauge block and the angles at which machinist blocks were set up, with under 10% error.
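The repeatability figure quoted above is just the standard deviation of repeated distance readings to a fixed target. A minimal sketch, using made-up illustrative readings rather than the study's data:

```python
# Sketch of the repeatability check: take repeated distance readings to a
# fixed planar target and summarize the spread with the sample standard
# deviation, then compare against the <2 mm figure. Readings are invented
# for illustration.

import statistics

def repeatability_mm(readings_mm):
    """Sample standard deviation of repeated distance measurements (mm)."""
    return statistics.stdev(readings_mm)

readings = [1000.2, 999.8, 1000.1, 1000.0, 999.9]  # mm, illustrative
spread = repeatability_mm(readings)
ok = spread < 2.0  # matches the reported <2 mm repeatability criterion
```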

    Monitoring of sequentially loaded reagents to a detection area in a microfluidic chip

    This paper presents a new scheme for monitoring the sequential loading of reagents to a detection site to conduct biological assays. The automated sequential loading of biological reagents to a detection site is monitored by a lens-less charge-coupled device (CCD). The fluid handling system employs a custom edge detection algorithm to determine the location of the channel edges. This custom edge detection method outperformed the built-in MATLAB edge detection functions. A parametric study was performed to determine the effects that the wavelength, the height above the detection site, and the voltage of a light-emitting diode (LED) would have on the system's performance. A white LED, a voltage of 2.6 V, and a height of at least 20 mm above the detection site produced the best results. The fluid handling system was also able to work with channels that had different geometric designs and with different types of fluid. This research focuses for the first time on monitoring the sequential loading of reagents to a detection site using a CCD imager that is also used for detection.
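Locating channel edges in a CCD intensity profile can be sketched as thresholding adjacent-pixel intensity jumps. This generic gradient threshold is an assumption standing in for the paper's custom edge-detection algorithm, and the profile values are invented:

```python
# Minimal sketch of channel-edge localization on a 1-D line profile from a
# lens-less CCD: report pixel indices where the intensity jump between
# neighboring pixels exceeds a threshold (a generic stand-in for the
# paper's custom algorithm).

def channel_edges(profile, threshold):
    """Return indices i where |I[i+1] - I[i]| exceeds the threshold."""
    return [i for i in range(len(profile) - 1)
            if abs(profile[i + 1] - profile[i]) > threshold]

# Illustrative profile: a dark fluid channel between bright walls.
profile = [200, 198, 60, 58, 59, 61, 199, 201]
edges = channel_edges(profile, threshold=100)  # left and right channel edges
```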